cognitive operation
Eliciting Reasoning in Language Models with Cognitive Tools
Ebouky, Brown; Bartezzaghi, Andrea; Rigotti, Mattia
The recent advent of reasoning models like OpenAI's o1 was met with excited speculation by the AI community about the mechanisms underlying these capabilities in closed models, followed by a rush of replication efforts, particularly from the open source community. These speculations were largely settled by the demonstration from DeepSeek-R1 that chains-of-thought and reinforcement learning (RL) can effectively replicate reasoning on top of base LLMs. However, it remains valuable to explore alternative methods of eliciting reasoning that could help elucidate the underlying mechanisms, as well as provide additional techniques that may offer complementary benefits. Here, we build on the long-standing literature in cognitive psychology and cognitive architectures, which postulates that reasoning arises from the orchestrated, sequential execution of a set of modular, predetermined cognitive operations. Crucially, we implement this key idea within a modern agentic tool-calling framework. In particular, we endow an LLM with a small set of "cognitive tools" encapsulating specific reasoning operations, each executed by the LLM itself. Surprisingly, this simple strategy results in considerable gains in performance on standard mathematical reasoning benchmarks compared to base LLMs, for both closed and open-weight models. For instance, providing our "cognitive tools" to GPT-4.1 increases its pass@1 performance on AIME2024 from 32% to 53%, even surpassing the performance of o1-preview. In addition to its practical implications, this demonstration contributes to the debate regarding the role of post-training methods in eliciting reasoning in LLMs versus the role of inherent capabilities acquired during pre-training, and whether post-training merely uncovers these latent abilities.
- Workflow (1.00)
- Research Report > New Finding (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
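The orchestration the abstract describes can be sketched as a simple tool-calling loop. This is a minimal illustration, not the paper's implementation: the tool names, prompt templates, and the `call_llm` callable (a stand-in for any chat-completion API) are all assumptions.

```python
# Sketch of "cognitive tools": each tool is a prompt template executed
# by the LLM itself, and the LLM orchestrates which tool to call next.
COGNITIVE_TOOLS = {
    "understand_question": "Restate the problem and list the known quantities:\n{ctx}",
    "recall_related": "Recall theorems or analogous solved problems relevant to:\n{ctx}",
    "examine_answer": "Check the current candidate answer for errors:\n{ctx}",
    "backtracking": "The current approach seems stuck; propose an alternative for:\n{ctx}",
}

def solve_with_cognitive_tools(question, call_llm, max_steps=8):
    """Let the model pick cognitive tools until it emits a final answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        menu = ", ".join(COGNITIVE_TOOLS)
        choice = call_llm(f"{transcript}\nPick one tool ({menu}) or say FINAL: <answer>.")
        if choice.startswith("FINAL:"):
            return choice[len("FINAL:"):].strip()
        if choice in COGNITIVE_TOOLS:
            # The tool call is itself just another LLM invocation.
            result = call_llm(COGNITIVE_TOOLS[choice].format(ctx=transcript))
            transcript += f"\n[{choice}] {result}"
    return call_llm(f"{transcript}\nGive your final answer now.")
```

The key design point is that each "tool" is the same model under a narrower instruction, so the gains come purely from structuring the sequence of operations.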
Duality-based Mode Operations and Pyramid Multilayer Mapping for Rhetorical Modes
Rhetorical modes are useful in both academic and non-academic writing, and can serve as objects of study in linguistic research and computational modeling. Establishing a conceptual bridge among these domains could enable each to benefit from the others. This paper proposes duality-based mode operations (split-unite, forward-backward, expansion-reduction, and orthogonal dualities) to expand the set of rhetorical modes, introducing generated modes such as combination and generalization, thereby enhancing epistemic diversity across multiple applications. It further presents a pyramid multilayer mapping framework (e.g., three layers running from a rhetorical-mode layer through a cognitive layer to an epistemic layer) that reduces the resulting cognitive complexity. The degrees of expressive diversity and complexity reduction are quantified through binomial combinatorics and Shannon entropy analysis. A Marginal Rhetorical Bit (MRB) is identified, permitting the definition of a rhetorical-scalable parameter that measures expressive growth speed in bits per stage. A direct entropy measure shows that hierarchical selection over smaller subsets markedly reduces choice uncertainty compared with flat selection across all modes. These considerations transform static, non-measurable rhetorical taxonomies into more dynamic and measurable systems for discourse design. This work suggests a pathway for future AI systems to operate not only on language tokens but on layered rhetorical reasoning structures, bridging linguistic, pedagogical, academic, and computational research.
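The entropy argument in the abstract is easy to illustrate numerically. The counts below (16 modes arranged as 4 branches of 4) are assumed for illustration only; the point is that a flat uniform choice costs log2(N) bits in a single decision, while a layered choice splits that into smaller per-decision uncertainties, and binomial combinatorics counts the expressive combinations of modes.

```python
from math import comb, log2

n_modes, n_branches = 16, 4          # assumed illustrative counts
per_branch = n_modes // n_branches   # 4 modes under each branch

# Flat selection: one uniform choice among all 16 modes.
flat_bits = log2(n_modes)            # 4.0 bits in a single decision

# Hierarchical selection: pick a branch (2 bits), then a mode within it
# (2 bits). Total information is the same, but no single decision
# exceeds 2 bits of uncertainty.
per_step_bits = max(log2(n_branches), log2(per_branch))

# Binomial combinatorics for expressive diversity: distinct two-mode
# combinations available to a writer.
pairs = comb(n_modes, 2)
```

Under a uniform distribution the total entropy is unchanged; the reduction the paper describes is in the uncertainty faced at each individual selection step.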
Sequence models for by-trial decoding of cognitive strategies from neural data
Otter, Rick den; Weindel, Gabriel; Stuit, Sjoerd; van Maanen, Leendert
Understanding the sequence of cognitive operations that underlie decision-making is a fundamental challenge in cognitive neuroscience. Traditional approaches often rely on group-level statistics, which obscure trial-by-trial variations in cognitive strategies. In this study, we introduce a novel machine learning method that combines Hidden Multivariate Pattern analysis with a Structured State Space Sequence model to decode cognitive strategies from electroencephalography data at the trial level. We apply this method to a decision-making task, where participants were instructed to prioritize either speed or accuracy in their responses. Our results reveal an additional cognitive operation, labeled Confirmation, which seems to occur predominantly in the accuracy condition but also frequently in the speed condition. The modeled probability that this operation occurs is associated with higher probability of responding correctly as well as changes of mind, as indexed by electromyography data. By successfully modeling cognitive operations at the trial level, we provide empirical evidence for dynamic variability in decision strategies, challenging the assumption of homogeneous cognitive processes within experimental conditions. Our approach shows the potential of sequence modeling in cognitive neuroscience to capture trial-level variability that is obscured by aggregate analyses. The introduced method offers a new way to detect and understand cognitive strategies in a data-driven manner, with implications for both theoretical research and practical applications in many fields.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
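The core idea of decoding per-trial operation probabilities from multichannel data can be sketched with a toy template-matching model. This is not the paper's Hidden Multivariate Pattern analysis or its structured state-space sequence model; the spatial templates, simulated EEG, and softmax readout below are all simplifying assumptions.

```python
import numpy as np

# Toy trial-level decoder: correlate each timepoint's multichannel
# pattern with one spatial template per cognitive operation, then turn
# the similarity scores into per-operation probabilities over time.
rng = np.random.default_rng(0)
n_channels, n_time, n_ops = 32, 100, 3

templates = rng.standard_normal((n_ops, n_channels))  # one pattern per operation
# Simulate one trial dominated by operation 1, plus sensor noise.
trial = templates[1][:, None] + 0.1 * rng.standard_normal((n_channels, n_time))

scores = templates @ trial                       # (n_ops, n_time) similarity
shifted = scores - scores.max(axis=0)            # stabilize the softmax
probs = np.exp(shifted) / np.exp(shifted).sum(axis=0)

dominant = int(probs.mean(axis=1).argmax())      # which operation dominates the trial
```

A real pipeline would replace the fixed templates with learned patterns and the independent per-timepoint softmax with a sequence model that enforces plausible orderings of operations.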
Cognitive Prompts Using Guilford's Structure of Intellect Model
Large language models (LLMs) demonstrate strong language generation capabilities but often struggle with structured reasoning, leading to inconsistent or suboptimal problem-solving. To mitigate this limitation, Guilford's Structure of Intellect (SOI) model - a foundational framework from intelligence theory - is leveraged as the basis for cognitive prompt engineering. The SOI model categorizes cognitive operations such as pattern recognition, memory retrieval, and evaluation, offering a systematic approach to enhancing LLM reasoning and decision-making. This position paper presents a novel cognitive prompting approach for enforcing SOI-inspired reasoning for improving clarity, coherence, and adaptability in model responses.
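The SOI-inspired approach can be sketched as composing operation categories into a structured prompt. The category wording below is our own paraphrase of Guilford's operations, not the paper's exact prompt, and the function name is illustrative.

```python
# Hypothetical SOI-inspired prompt builder: map a few of Guilford's
# operation categories to instruction clauses and compose them into a
# numbered system prompt enforcing a reasoning order.
SOI_OPERATIONS = {
    "cognition": "Identify and comprehend the key elements of the problem.",
    "memory": "Recall facts, formulas, or precedents relevant to it.",
    "divergent_production": "Generate several distinct candidate approaches.",
    "convergent_production": "Select the single most promising approach.",
    "evaluation": "Judge the chosen solution for correctness and consistency.",
}

def build_soi_prompt(task, operations=tuple(SOI_OPERATIONS)):
    """Compose an ordered cognitive prompt from SOI operation categories."""
    steps = "\n".join(
        f"{i}. {op.replace('_', ' ').title()}: {SOI_OPERATIONS[op]}"
        for i, op in enumerate(operations, 1)
    )
    return (f"Solve the task by working through these operations in order:\n"
            f"{steps}\n\nTask: {task}")
```

Passing a subset or reordering of the categories to `operations` yields task-specific variants of the prompt.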
Unlocking Structured Thinking in Language Models with Cognitive Prompting
We propose cognitive prompting as a novel approach to guide problem-solving in large language models (LLMs) through structured, human-like cognitive operations, such as goal clarification, decomposition, filtering, abstraction, and pattern recognition. By employing systematic, step-by-step reasoning, cognitive prompting enables LLMs to tackle complex, multi-step tasks more efficiently. We introduce three variants: a deterministic sequence of cognitive operations, a self-adaptive variant in which the LLM dynamically selects the sequence of cognitive operations, and a hybrid variant that uses generated correct solutions as few-shot chain-of-thought prompts. Experiments with LLaMA, Gemma 2, and Qwen models, each in two sizes, on the arithmetic reasoning benchmark GSM8K demonstrate that cognitive prompting significantly improves performance compared to standard question answering.
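The difference between the deterministic and self-adaptive variants can be sketched as two control loops around the same operation list. The `ask` callable is a hypothetical stand-in for the LLM, and the prompt wording is assumed; only the operation names come from the abstract.

```python
# The five cognitive operations named in the abstract.
OPS = ["goal clarification", "decomposition", "filtering",
       "abstraction", "pattern recognition"]

def deterministic(problem, ask):
    """Variant 1: apply every operation in a fixed order."""
    ctx = problem
    for op in OPS:
        ctx += "\n" + ask(f"Apply '{op}' to the problem so far:\n{ctx}")
    return ask(f"{ctx}\nState the final answer.")

def self_adaptive(problem, ask, max_steps=5):
    """Variant 2: the model chooses the next operation, or stops."""
    ctx = problem
    for _ in range(max_steps):
        choice = ask(f"{ctx}\nNext operation from {OPS}, or 'done'?")
        if choice == "done":
            break
        ctx += "\n" + ask(f"Apply '{choice}' to:\n{ctx}")
    return ask(f"{ctx}\nState the final answer.")
```

The hybrid variant would additionally seed `ctx` with previously generated correct solutions as few-shot chain-of-thought examples.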
Pinaki Laskar on LinkedIn: #artificialintelligence #consciousness #platformtheory
Are animals or #artificialintelligence capable of having consciousness? How and where does the brain generate #consciousness? A new concept describes consciousness as a state tied to complex cognitive operations, and not as a passive basic state that automatically prevails when we are awake. There is no consensus on the meaning of the term consciousness: definitions are exceptionally broad, ranging from the simple processing of sensory information (perception), through memory-related cognitive functions (mental time travel), to meta-cognition (awareness of self, others, time, and things). Current definitions of different states of consciousness by neuro- and electrophysiological approaches are even less helpful in trying to corner this volatile phenomenon.
Consciousness in Humans, Animals and Artificial Intelligence - Neuroscience News
Summary: A new theory suggests consciousness is a state tied to complex cognitive operations, and not a passive basic state that automatically prevails when we are awake. Two researchers at Ruhr-Universität Bochum (RUB) have come up with a new theory of consciousness. They have long been exploring the nature of consciousness, the question of how and where the brain generates consciousness, and whether animals also have consciousness. The new concept describes consciousness as a state that is tied to complex cognitive operations – and not as a passive basic state that automatically prevails when we are awake. Professor Armin Zlomuzica from the Behavioral and Clinical Neuroscience research group at RUB and Professor Ekrem Dere, formerly at Université Paris-Sorbonne, now at RUB, describe their theory in the journal Behavioural Brain Research.
New theory of consciousness in humans, animals and artificial intelligence
Two researchers at Ruhr-Universität Bochum (RUB) have come up with a new theory of consciousness. They have long been exploring the nature of consciousness, the question of how and where the brain generates consciousness, and whether animals also have consciousness. The new concept describes consciousness as a state that is tied to complex cognitive operations, and not as a passive basic state that automatically prevails when we are awake. Professor Armin Zlomuzica from the Behavioral and Clinical Neuroscience research group at RUB and Professor Ekrem Dere, formerly at Université Paris-Sorbonne, now at RUB, describe their theory in the journal Behavioural Brain Research. The printed version will be published on 15 February 2022; the online article has been available since November 2021.
Evolutionary Language Games as a Paradigm for Integrated AI Research
Steels, Luc L (ICREA - IBE, Barcelona and Sony Computer Science Lab, Paris)
Evolutionary language games are a way to study how perceptions, concepts, and language can emerge in populations of situated, embodied agents, driven by the needs of communication and the properties of the environment. Evolutionary language games are currently being investigated using physical robots, which requires that the full cycle of processing activities, from physical robotic embodiment to sensory-motor processing, visual perception and action, conceptualization, and language processing, be integrated in a single system. This contribution reports on a large-scale, long-term effort to experiment with evolutionary language games and discusses the major results achieved so far.